186 research outputs found

    Binaural Cues for Distance and Direction of Nearby Sound Sources

    To a first-order approximation, binaural localization cues are ambiguous: a number of source locations give rise to nearly the same interaural differences. For sources more than a meter from the listener, binaural localization cues are approximately equal for any source on a cone centered on the interaural axis (i.e., the well-known "cones of confusion"). The current paper analyzes simple geometric approximations of a listener's head to gain insight into localization performance for sources near the listener. In particular, if the head is treated as a rigid, perfect sphere, interaural intensity differences (IIDs) can be broken down into two main components. One component is constant along the cone of confusion (and thus covaries with the interaural time difference, or ITD). The other component is roughly constant for a sphere centered on the interaural axis and depends only on the relative path lengths from the source to the two ears. This second factor is only large enough to be perceptible when sources are within one or two meters of the listener. These results are not dramatically different if one assumes that the ears are separated by 160 degrees along the surface of the sphere (rather than diametrically opposite one another). Thus, for sources within a meter of the listener, binaural information should allow listeners to locate sources within a volume around a circle centered on the interaural axis, on a "doughnut of confusion." The volume of the doughnut of confusion increases dramatically with the angle between the source and the interaural axis, degenerating to the entire median plane in the limit. Air Force Office of Scientific Research (F49620-98-1-0108)
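    A back-of-the-envelope computation makes the scale of this second, path-length-dependent IID component concrete. The sketch below assumes a spherical head of radius 8.75 cm, ears diametrically opposite on the interaural axis, and simple free-field 1/r spreading; the radius, geometry, and spreading model are illustrative assumptions, not the paper's exact analysis.

        import numpy as np

        A = 0.0875  # assumed head radius in meters

        def path_lengths(r, theta):
            """Distances from a source at range r (m) and azimuth theta
            (rad; 0 = straight ahead, pi/2 = on the interaural axis)
            to the near and far ears."""
            src = r * np.array([np.sin(theta), np.cos(theta)])
            near_ear = np.array([A, 0.0])
            far_ear = np.array([-A, 0.0])
            return np.linalg.norm(src - near_ear), np.linalg.norm(src - far_ear)

        def distance_iid_db(r, theta):
            """IID component due only to relative path lengths (1/r spreading)."""
            d_near, d_far = path_lengths(r, theta)
            return 20.0 * np.log10(d_far / d_near)

        # The path-length IID shrinks rapidly beyond about a meter:
        for r in (0.25, 0.5, 1.0, 2.0, 10.0):
            print(f"r = {r:5.2f} m: distance IID ~ {distance_iid_db(r, np.pi / 2):4.1f} dB")

    Under these assumptions the path-length component falls from roughly 6 dB at 25 cm to about 1.5 dB at 1 m and well under 1 dB beyond 2 m, consistent with the claim that it is perceptually relevant only for nearby sources.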

    The Effects of Room Acoustics on Auditory Spatial Cues

    Law of the first wavefront

    Contributions of Sensory Coding and Attentional Control to Individual Differences in Performance in Spatial Auditory Selective Attention Tasks

    Listeners with normal hearing thresholds differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in cortex. Yet this sensory representation also depends on the coding fidelity of the peripheral auditory system, so both factors may contribute to individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials from the scalp (ERPs, reflecting cortical responses to sound), and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited by the coding of fine stimulus details vs. by control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits task performance, individual behavioral performance correlates with subcortical coding strength (derived by computing how much the EFR is degraded for fully masked tones compared to partially masked tones); in this experiment, however, the effects of attention on cortical ERPs were unrelated to individual performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences not only correlate with subcortical coding strength; after factoring out the influence of subcortical coding strength, behavioral differences also correlate with the strength of attentional modulation of ERPs. These results support the hypothesis that differences in behavioral ability amongst listeners with normal hearing thresholds can arise from both subcortical coding differences and differences in attentional control, depending on stimulus characteristics and task demands.
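    For intuition about the kind of degradation metric described above, here is a minimal, hypothetical sketch: EFR strength is quantified as spectral magnitude at the envelope frequency, and coding strength as the dB ratio between the partially and fully masked conditions. The 100 Hz modulation rate, sampling rate, synthetic data, and the metric itself are assumptions for illustration, not the paper's actual analysis pipeline.

        import numpy as np

        FS = 16000     # sampling rate (Hz), assumed
        F_MOD = 100.0  # envelope frequency tracked by the EFR, assumed

        def efr_strength(eeg, fs=FS, f_mod=F_MOD):
            """Magnitude of the EEG spectrum at the envelope frequency."""
            spec = np.abs(np.fft.rfft(eeg)) / len(eeg)
            freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
            return spec[np.argmin(np.abs(freqs - f_mod))]

        def coding_strength_db(eeg_partial, eeg_full):
            """How much the EFR degrades under full masking (dB)."""
            return 20.0 * np.log10(efr_strength(eeg_partial) / efr_strength(eeg_full))

        # Synthetic one-second "responses": the fully masked response has
        # a weaker 100-Hz component buried in the same noise floor.
        t = np.arange(FS) / FS
        rng = np.random.default_rng(0)
        partial = 1.0 * np.sin(2 * np.pi * F_MOD * t) + rng.normal(0, 2.0, FS)
        full = 0.3 * np.sin(2 * np.pi * F_MOD * t) + rng.normal(0, 2.0, FS)
        print(f"subcortical coding strength ~ {coding_strength_db(partial, full):.1f} dB")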

    An Investigation of the Effects of Categorization and Discrimination Training on Auditory Perceptual Space

    Psychophysical phenomena such as categorical perception and the perceptual magnet effect indicate that our auditory perceptual spaces are warped for some stimuli. This paper investigates the effects of two different kinds of training on auditory perceptual space. It is first shown that categorization training, in which subjects learn to identify stimuli within a particular frequency range as members of the same category, can lead to a decrease in sensitivity to stimuli in that category. This phenomenon is an example of acquired similarity and apparently has not been previously demonstrated for a category-relevant dimension. Discrimination training with the same set of stimuli was shown to have the opposite effect: subjects became more sensitive to differences in the stimuli presented during training. Further experiments investigated some of the conditions that are necessary to generate the acquired similarity found in the first experiment. The results of these experiments are used to evaluate two neural network models of the perceptual magnet effect. These models, in combination with our experimental results, are used to generate an experimentally testable hypothesis concerning changes in the brain's auditory maps under different training conditions. Alfred P. Sloan Foundation and the National Institute on Deafness and Other Communication Disorders (R29 02852); Air Force Office of Scientific Research (F49620-98-1-0108)

    Identifying where you are in a room: Sensitivity to room acoustics

    Proceedings of the 9th International Conference on Auditory Display (ICAD), Boston, MA, July 7-9, 2003. In a spatial auditory display, reverberation provides a reliable cue for source distance, increases the subjective realism of the display, and improves the externalization of simulated sound sources. However, relatively little is known about perceptual sensitivity to differences in reverberation patterns or how precisely reverberation must be simulated in a spatial auditory display. This paper presents preliminary results of a study examining sensitivity to changes in listener location in a simulated room. Results suggest that monaural cues in the ear receiving the least direct-sound energy provide the most salient information for identifying one's location in the room. However, many details in the reverberation pattern are not easily perceived. These results indicate that including reverberation from simplified room models may provide the benefits of reverberation without noticeably degrading the realism of the display.

    Perceptual consequences of including reverberation in spatial auditory displays

    Proceedings of the 9th International Conference on Auditory Display (ICAD), Boston, MA, July 7-9, 2003. This paper evaluates the perceptual consequences of including reverberation in spatial auditory displays for rapidly varying signals (obstruent consonants). Preliminary results suggest that the effect of reverberation depends on both syllable position and reverberation characteristics. As many of the non-speech sounds in an auditory display share acoustic features with obstruent consonants, these results are important when designing spatial auditory displays for non-speech signals as well.

    Spatial Auditory Display: Comments on Shinn-Cunningham et al.

    Spatial auditory displays have received a great deal of attention in the community investigating how to present information through sound. This short commentary discusses our 2001 ICAD paper (Shinn-Cunningham, Streeter, and Gyss), which explored whether it is possible to provide enhanced spatial auditory information in an auditory display. The discussion provides some historical context and describes how work on representing information in spatial auditory displays has progressed over the last five years.

    HISTORICAL CONTEXT

    The next time you find yourself in a noisy, crowded environment like a cocktail party, plug one ear. Suddenly, your ability to sort out and understand the sounds in the environment collapses. This simple demonstration of the importance of spatial hearing to everyday behavior has motivated research in spatial auditory processing for decades. Perhaps unsurprisingly, spatial auditory displays have received a great deal of attention in the ICAD community. Sound source location is one stimulus attribute that can be easily manipulated; thus, spatial information can be used to represent arbitrary information in an auditory display. In addition to being used directly to encode data in an auditory display, spatial cues are also important in allowing a listener to focus attention on a source of interest when there are multiple sound sources competing for auditory attention. Although it is theoretically easy to produce accurate spatial cues in an auditory display, the signal processing required to render natural spatial cues in real time (and the amount of care required to render realistic cues) is prohibitive even with current technologies. Given both the important role that spatial auditory information can play in conveying acoustic information to a listener and the practical difficulties encountered when trying to include realistic spatial cues in a display, spatial auditory perception and technologies for rendering virtual auditory space have both been well-represented areas of research at every ICAD conference held to date.

    Even with a good virtual auditory display, the amount of spatial auditory information that a listener can extract is limited compared to other senses. For instance, auditory localization accuracy is orders of magnitude worse than visual spatial resolution. The study reprinted here, originally reported at ICAD 2001, was motivated by a desire to increase the amount of spatial information a listener could extract from a virtual auditory display. The original idea was to see if spatial resolution could be improved in a virtual auditory display by emphasizing spatial acoustic cues. The questions we were interested in were: 1) Can listeners learn to accommodate a new mapping between exocentric location and acoustic cues, so that they do not mislocalize sounds after training? and 2) Do such remappings lead to improved spatial resolution, or is there some other factor limiting performance?

    RESEARCH PROCESS

    The reprinted study was designed to test a model that accounted for results from previous experiments investigating remapped spatial cues. The model predicted that spatial performance is restricted by central memory constraints, not by a low-level sensory limitation on spatial auditory resolution. However, the model failed for the experiments reported: listeners actually achieved better-than-normal spatial resolution following training with the remapped auditory cues (unlike in any previous studies). These results were encouraging on the one hand, as they suggested …

    Accurate Sound Localization in Reverberant Environments Is Mediated by Robust Encoding of Spatial Cues in the Auditory Midbrain

    Get PDF
    In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener's ears, distorting the spatial cues for sound localization. Yet human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sensitivity of single neurons in the auditory midbrain of anesthetized cats follows a similar time course, although onset dominance in temporal response patterns results in more robust directional sensitivity than expected, suggesting a simple mechanism for improving directional sensitivity in reverberation. In parallel behavioral experiments, we demonstrate that human lateralization judgments are consistent with predictions from a population rate model decoding the observed midbrain responses, suggesting a subcortical origin for robust sound localization in reverberant environments. National Institutes of Health (U.S.) (Grant R01 DC002258); National Institutes of Health (U.S.) (Grant R01 DC05778-02); National Institutes of Health (U.S.) (Eaton-Peabody Laboratory Core Grant P30 DC005209); National Institutes of Health (U.S.) (Grant T32 DC0003
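    As a rough illustration of what a population rate decoder can look like, the sketch below predicts laterality from the normalized difference in summed firing rates between two model midbrain (IC) populations with opposite contralateral preferences, one common reading of "population rate model." The sigmoidal tuning, Poisson spiking, and all parameters are illustrative assumptions, not the fitted model from the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        N_NEURONS = 50
        MAX_RATE = 50.0                               # spikes/s, assumed
        SLOPES = rng.uniform(0.005, 0.02, N_NEURONS)  # per microsecond, assumed

        def ic_population_count(itd_us, sign):
            """Summed Poisson spike count for one IC population with
            sigmoidal ITD tuning; sign = +1 prefers positive (right-leading)
            ITDs, sign = -1 prefers negative (left-leading) ITDs."""
            rates = MAX_RATE / (1.0 + np.exp(-sign * SLOPES * itd_us))
            return rng.poisson(rates).sum()

        def decode_laterality(itd_us):
            """Normalized hemispheric rate difference in [-1, 1]
            (+1 = fully right-lateralized)."""
            left_ic = ic_population_count(itd_us, +1.0)   # contra to right ear
            right_ic = ic_population_count(itd_us, -1.0)  # contra to left ear
            return (left_ic - right_ic) / (left_ic + right_ic)

        for itd in (-500, -200, 0, 200, 500):  # ITD in microseconds
            print(f"ITD {itd:+4d} us -> decoded laterality {decode_laterality(itd):+.2f}")

    A decoder of this form can be fed the time-varying rates recorded in reverberation to generate lateralization predictions comparable to the human judgments described above.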